Towards human linguistic machine translation evaluation
Authors
Abstract
Similar resources
Integrating Linguistic Information in Machine Translation Evaluation
The automatic evaluation of machine translation (MT) has been a very important factor driving the success of statistical machine translation for most of this decade. Prior to automatic metrics, researchers were forced to rely more heavily on human evaluations, which are costly and time-consuming. Automatic metrics allow systems to analyze and reduce errors while they train. Fully automatic mach...
The Contribution of Linguistic Features to Automatic Machine Translation Evaluation
A number of approaches to Automatic MT Evaluation based on deep linguistic knowledge have been suggested. However, n-gram-based metrics are still the dominant approach today. The main reason is that the advantages of employing deeper linguistic information have not yet been clarified. In this work, we propose a novel approach for meta-evaluation of MT evaluation metrics, since correlation coffi...
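The meta-evaluation the abstract alludes to typically correlates a metric's scores with human judgments over a set of translations. A minimal sketch, with entirely hypothetical scores (not data from any of the papers listed here):

```python
# Meta-evaluation sketch: Pearson correlation between automatic metric
# scores and human adequacy ratings for the same set of translations.
# All numbers below are illustrative placeholders.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

metric_scores = [0.31, 0.42, 0.55, 0.61, 0.70]  # hypothetical metric outputs
human_scores = [2.0, 2.5, 3.5, 4.0, 4.5]        # hypothetical human ratings

print(round(pearson(metric_scores, human_scores), 3))
```

A metric whose scores correlate more strongly with the human ratings is judged the better metric; this is the usual criterion against which linguistically informed metrics are compared to n-gram baselines.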
Linguistic-based Evaluation Criteria to identify Statistical Machine Translation Errors
Machine translation evaluation methods are essential for analyzing the performance of translation systems. Up to now, the most traditional methods have been automatic measures such as BLEU, or quality assessments performed by native human evaluators. In order to complement these traditional procedures, the current paper presents a new human evaluation based on the expert kn...
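The core of BLEU and related n-gram measures is modified n-gram precision: candidate n-grams are counted against a reference, with each n-gram's count clipped to its count in the reference. A minimal unigram sketch (full BLEU also combines orders 1-4 and applies a brevity penalty):

```python
from collections import Counter

# BLEU-style modified n-gram precision against a single reference.
# Illustrative sketch only, not any paper's exact evaluation protocol.

def modified_precision(candidate, reference, n=1):
    cand = Counter(tuple(candidate[i:i + n])
                   for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n])
                  for i in range(len(reference) - n + 1))
    # Clip each candidate n-gram count by its count in the reference,
    # so repeating a reference word cannot inflate the score.
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

cand = "the cat is on the mat".split()
ref = "there is a cat on the mat".split()
print(modified_precision(cand, ref))  # 5 clipped matches out of 6 unigrams
```

The clipping step is what distinguishes modified precision from plain precision and is the reason such metrics are purely surface-based, which is exactly the limitation linguistically informed evaluations aim to address.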
Linguistic Bases for Machine Translation
Researchers in MT do not work with linguistic theories which are 'in vogue' today. The two special issues on MT of the journal Computational Linguistics (CL 1985) contain eight contributions of the leading teams. In the bibliography of these articles you don't find names like Chomsky, Montague, Bresnan, Gazdar, Kamp, Barwise, Perry etc.[2] Syntactic theories like GB, GPSG, LFG are not mentioned...
Methods for human evaluation of machine translation
Evaluation of machine translation (MT) is a difficult task, both for humans, and using automatic metrics. The main difficulty lies in the fact that there is not one single correct translation, but many alternative good translation options. MT systems are often evaluated using automatic metrics, which commonly rely on comparing a translation to only a single human reference translation. An alter...
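One standard way to soften the single-reference problem the abstract describes is to score a hypothesis against every available reference and keep the best match. A sketch using a simple token-overlap score (a stand-in for any sentence-level metric; sentences are made up for illustration):

```python
# Scoring against multiple references: take the best score over all
# references, so any one valid phrasing can receive credit.

def overlap(hyp, ref):
    # Fraction of hypothesis tokens that also appear in the reference
    # (a deliberately simple stand-in for a real sentence-level metric).
    hyp_tokens, ref_tokens = hyp.split(), set(ref.split())
    return sum(t in ref_tokens for t in hyp_tokens) / len(hyp_tokens)

def best_reference_score(hyp, references):
    return max(overlap(hyp, r) for r in references)

references = [
    "the committee approved the plan",
    "the panel accepted the proposal",
]
hypothesis = "the panel approved the plan"
print(best_reference_score(hypothesis, references))
```

Even with several references, surface matching cannot enumerate all good translations, which is the motivation the abstract gives for human evaluation as an alternative.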
Journal
Journal title: Digital Scholarship in the Humanities
Year: 2013
ISSN: 2055-7671,2055-768X
DOI: 10.1093/llc/fqt065